Visual (re)localization addresses the problem of estimating the 6-DoF (degree of freedom) camera pose of a query image captured in a known scene, which is a key building block of many computer vision and robotics applications. Recent advances in structure-based localization solve this problem by memorizing the mapping from image pixels to scene coordinates with neural networks, so as to build 2D-3D correspondences for camera pose optimization. However, such memorization requires training with a large number of posed images in each scene, which is heavy and inefficient. In contrast, a few images are usually sufficient to cover the main regions of a scene for a human operator to perform visual localization. In this paper, we propose a scene region classification approach to achieve fast and effective scene memorization from few-shot images. Our insight is to leverage a) a pre-learned feature extractor, b) a scene region classifier, and c) a meta-learning strategy to accelerate training while mitigating overfitting. We evaluate our method on both indoor and outdoor benchmarks. The experiments validate the effectiveness of our method in the few-shot setting, and the training time is significantly reduced to only a few minutes. Code available: \url{https://github.com/siyandong/src}
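The abstract only outlines the pipeline, so the following is a minimal, hypothetical sketch of how a scene-region-classification localizer could be wired together: a frozen, pre-learned feature extractor produces a feature map, a lightweight per-scene head classifies each feature cell into a scene region, and the pose is recovered with PnP-RANSAC. Using region centers as approximate scene coordinates, as well as all class names and shapes, are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn
import numpy as np
import cv2

class RegionClassifierHead(nn.Module):
    """Hypothetical per-scene head: classifies each feature-map cell into one of
    num_regions scene regions."""
    def __init__(self, feat_dim: int, num_regions: int):
        super().__init__()
        self.head = nn.Conv2d(feat_dim, num_regions, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)  # (B, num_regions, H', W') logits

def localize(image_feats: torch.Tensor,
             head: RegionClassifierHead,
             region_centers: np.ndarray,   # (num_regions, 3) 3D region centers, assumed precomputed
             pixel_coords: np.ndarray,     # (H'*W', 2) pixel positions of the feature cells
             K: np.ndarray):               # (3, 3) camera intrinsics
    """Predict a region label per feature cell, build coarse 2D-3D matches from the
    region centers, and estimate the camera pose with PnP-RANSAC."""
    with torch.no_grad():
        logits = head(image_feats)                              # (1, R, H', W')
        labels = logits.argmax(dim=1).flatten().cpu().numpy()   # (H'*W',)
    pts3d = region_centers[labels].astype(np.float64)           # coarse scene coordinates
    pts2d = pixel_coords.astype(np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, None, reprojectionError=8.0, iterationsCount=1000)
    return ok, rvec, tvec
```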
Neural implicit functions are highly effective for data representation. However, the implicit functions learned by neural networks often contain unexpected noise or lose fine details if the input data has many details or spans both low- and high-frequency bands. Removing artifacts while preserving fine-scale content is challenging, and the results typically suffer from over-smoothing or noise. To address this dilemma, we propose a new framework (FINN) that integrates a filtering module into the MLP to perform data reconstruction while adapting to regions containing different frequencies. The smoothing operator of the filtering module acts on the intermediate results of the network and encourages the outputs to be smooth, while the recovering operator brings high frequencies back to regions that are over-smoothed. The two counteracting operators play consecutively in all MLP layers to adaptively influence the reconstruction. We demonstrate the advantages of FINN on several tasks and show significant improvements over state-of-the-art methods. In addition, FINN also yields better performance in terms of convergence speed and network stability.
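The abstract describes a smoothing operator and a recovering operator applied to the intermediate features of every MLP layer, but not their exact form. Below is a purely illustrative reading of that idea: each layer produces hidden features, a low-pass "smoothed" version of them, and a learned gate that re-injects the high-frequency residual where detail should be recovered. The operator definitions and the gating are assumptions, not the paper's formulation, and the smoothing here acts along the feature axis only for brevity; a real implementation would more plausibly smooth over neighboring coordinate samples.

```python
import torch
import torch.nn as nn

class FilteredMLPLayer(nn.Module):
    """Illustrative layer: a linear+activation step followed by a smoothing operator
    (a simple moving average, here over the feature axis) and a recovering operator
    (a learned gate that restores the high-frequency residual)."""
    def __init__(self, in_dim: int, out_dim: int, kernel_size: int = 5):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.act = nn.ReLU()
        # Smoothing: fixed moving-average filter.
        self.smooth = nn.AvgPool1d(kernel_size, stride=1, padding=kernel_size // 2)
        # Recovering: per-feature gate deciding how much detail to restore.
        self.gate = nn.Sequential(nn.Linear(out_dim, out_dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.act(self.linear(x))                        # (B, out_dim)
        h_smooth = self.smooth(h.unsqueeze(1)).squeeze(1)   # low-frequency part
        h_detail = h - h_smooth                             # high-frequency residual
        g = self.gate(h)                                    # where to recover detail
        return h_smooth + g * h_detail

# A toy implicit function f(coords) -> value built from such layers.
model = nn.Sequential(
    FilteredMLPLayer(2, 256),
    FilteredMLPLayer(256, 256),
    nn.Linear(256, 1),
)
print(model(torch.rand(8, 2)).shape)  # torch.Size([8, 1])
```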
In this paper, we introduce the solution we used in the VSPW 2021 Challenge. Our experiments are based on two baseline models, Swin Transformer and MaskFormer. To further boost performance, we adopt the stochastic weight averaging technique and design a hierarchical ensemble strategy. Without using any external semantic segmentation dataset, our solution ranked 5th on the private leaderboard. In addition, we made some interesting attempts to tackle long-tailed recognition and overfitting, which achieved improvements on the val subset. Perhaps due to distribution differences, these attempts did not work on the test subset. We also describe these attempts and hope they will inspire other researchers.
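The two training tricks named here are generic enough to illustrate with standard PyTorch utilities. The sketch below uses torch.optim.swa_utils for stochastic weight averaging and a flat probability-averaging ensemble as a simplified stand-in for the hierarchical ensemble; the model, criterion, dataloader, and schedule are placeholders, not the challenge configuration.

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

def train_with_swa(model, train_loader, criterion, epochs=30, swa_start=20, device="cuda"):
    """Ordinary training, then averaging of the weights visited after `swa_start`
    (stochastic weight averaging)."""
    model = model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    swa_model = AveragedModel(model)
    swa_scheduler = SWALR(optimizer, swa_lr=0.005)

    for epoch in range(epochs):
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
        if epoch >= swa_start:
            swa_model.update_parameters(model)
            swa_scheduler.step()

    update_bn(train_loader, swa_model, device=device)  # recompute BatchNorm statistics
    return swa_model

@torch.no_grad()
def ensemble_probs(models, images):
    """Flat softmax-probability averaging (a simplification of a hierarchical ensemble)."""
    probs = [torch.softmax(m(images), dim=1) for m in models]
    return torch.stack(probs, dim=0).mean(dim=0)
```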
Cross-scene model adaptation is crucial for camera relocalization in real scenarios. It is often preferable that a pre-learned model can be quickly adapted to a novel scene with as few training samples as possible. However, existing state-of-the-art approaches can hardly support such few-shot scene adaptation due to the entanglement of image feature extraction and scene coordinate regression. To address this issue, we approach camera relocalization with a decoupled solution in which feature extraction, coordinate regression, and pose estimation are performed separately. Our key insight is that the feature encoder used for coordinate regression should be learned by removing the distracting factor of coordinate systems: the feature encoder is learned from multiple scenes to obtain general feature representations and, more importantly, coordinate-system-insensitive features. With this feature prior, combined with a coordinate regressor, few-shot observations in a new scene can be connected with the 3D world much more easily than in existing integrated solutions. Experiments show the superiority of our method over state-of-the-art approaches, yielding higher accuracy on several scenes with diverse visual appearances and viewpoint distributions.
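Again only as a hypothetical illustration of the decoupling described above: a scene-agnostic feature encoder shared across scenes and frozen at adaptation time, a small per-scene coordinate regressor trained on the few-shot images, and a standard PnP-RANSAC solver (as in the earlier sketch) for the pose. Module sizes, the loss, and the adaptation loop are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneAgnosticEncoder(nn.Module):
    """Feature encoder pre-learned on many scenes; frozen when adapting to a new scene."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, out_dim, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # (B, out_dim, H/8, W/8)

class SceneCoordRegressor(nn.Module):
    """Lightweight per-scene head mapping features to 3D scene coordinates."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(feat_dim, 128, 1), nn.ReLU(), nn.Conv2d(128, 3, 1))

    def forward(self, feats):
        return self.head(feats)  # (B, 3, H/8, W/8) scene coordinates

def adapt_to_new_scene(encoder, regressor, few_shot_loader, steps=500, lr=1e-3):
    """Only the coordinate regressor is optimized; the encoder stays frozen."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(regressor.parameters(), lr=lr)
    for _ in range(steps):
        images, gt_coords = next(iter(few_shot_loader))  # (B,3,H,W), (B,3,H/8,W/8)
        with torch.no_grad():
            feats = encoder(images)
        loss = F.l1_loss(regressor(feats), gt_coords)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return regressor
```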
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
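The Global Response Normalization layer is described in the ConvNeXt V2 paper as three steps: global feature aggregation, feature normalization, and feature calibration with a residual path. A compact PyTorch rendering of that recipe, assuming channels-last tensors of shape (N, H, W, C) as ConvNeXt blocks use, looks roughly like this:

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization over channels-last feature maps (N, H, W, C)."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 1) Global aggregation: per-channel L2 norm over the spatial dimensions.
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)      # (N, 1, 1, C)
        # 2) Normalization: divisive normalization across channels.
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)   # (N, 1, 1, C)
        # 3) Calibration with learnable affine parameters and a residual connection.
        return self.gamma * (x * nx) + self.beta + x

# Typical placement: after the pointwise-expansion activation inside a ConvNeXt block.
x = torch.randn(2, 14, 14, 384)
print(GRN(384)(x).shape)  # torch.Size([2, 14, 14, 384])
```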
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
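To make "bijective Brenier maps parameterized by input convex neural networks" concrete: a Brenier map is the gradient of a convex potential, and an input convex neural network (ICNN) keeps its hidden-to-hidden weights non-negative and its activations convex and non-decreasing, so the scalar output is convex in the input. The sketch below is a generic ICNN with the map obtained via autograd; the layer sizes and clamping trick are illustrative choices, not the authors' exact architecture or training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Scalar-valued input convex network: convex in x because the z-path weights are
    clamped non-negative and softplus is convex and non-decreasing."""
    def __init__(self, dim: int, hidden: int = 64, depth: int = 3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(depth)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False) for _ in range(depth - 1)])
        self.out = nn.Linear(hidden, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.softplus(self.Wx[0](x))
        for wx, wz in zip(self.Wx[1:], self.Wz):
            z = F.softplus(wx(x) + F.linear(z, wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0)).squeeze(-1)  # convex potential

def brenier_map(potential: ICNN, x: torch.Tensor) -> torch.Tensor:
    """Map a point through the gradient of the convex potential."""
    x = x.requires_grad_(True)
    y = potential(x).sum()
    return torch.autograd.grad(y, x, create_graph=True)[0]

phi = ICNN(dim=2)
z = torch.randn(5, 2)
print(brenier_map(phi, z).shape)  # torch.Size([5, 2])
```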
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
This paper illustrates the technologies of user next intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent, which is an offline concept knowledge graph in the Life-Service domain modeling the historical behaviors of users, the rich content interacted by users and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of the downstream tasks while retaining explainability.
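The deployed system is proprietary, so the following is only a generic sketch of the kind of model the abstract describes: a Transformer encoder over the user's recent behavior sequence, fused with a dense vector of knowledge-graph or expert-rule features before the intent classifier. Every name, dimension, and the way the rules are injected are assumptions.

```python
import torch
import torch.nn as nn

class NextIntentModel(nn.Module):
    """Hypothetical next-intent predictor: behavior tokens -> Transformer encoder,
    concatenated with a vector of expert-rule / knowledge-graph features."""
    def __init__(self, vocab_size: int, num_intents: int, rule_dim: int,
                 d_model: int = 128, nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.classifier = nn.Linear(d_model + rule_dim, num_intents)

    def forward(self, behavior_ids: torch.Tensor, rule_feats: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(behavior_ids))   # (B, T, d_model)
        pooled = h.mean(dim=1)                       # simple mean pooling over the sequence
        return self.classifier(torch.cat([pooled, rule_feats], dim=-1))  # intent logits

model = NextIntentModel(vocab_size=10000, num_intents=200, rule_dim=32)
logits = model(torch.randint(0, 10000, (4, 20)), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 200])
```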
We present Second Thought, a new learning paradigm that enables language models (LMs) to re-align with human values. By modeling the chain-of-edits between value-unaligned and value-aligned text, with LM fine-tuning and additional refinement through reinforcement learning, Second Thought not only achieves superior performance in three value alignment benchmark datasets but also shows strong human-value transfer learning ability in few-shot scenarios. The generated editing steps also offer better interpretability and ease for interactive error correction. Extensive human evaluations further confirm its effectiveness.
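Second Thought's training signal is the chain of edits from a value-unaligned text to its aligned version. As a purely illustrative sketch of how such examples could be serialized for standard causal-LM fine-tuning (the delimiters, field names, and example strings are made up, not the paper's format):

```python
def format_chain_of_edits(unaligned: str, edits: list[str], aligned: str) -> str:
    """Serialize one training example: source text, intermediate edit steps, final text."""
    steps = "\n".join(f"Edit {i + 1}: {e}" for i, e in enumerate(edits))
    return f"Source: {unaligned}\n{steps}\nRevised: {aligned}"

example = format_chain_of_edits(
    unaligned="You should just give up.",
    edits=["Soften the dismissive tone", "Add an encouraging alternative"],
    aligned="This is hard, but you could try breaking it into smaller steps.",
)
print(example)
```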
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision. Thanks to its strong expressive power, researchers are investigating ways to deploy transformers in reinforcement learning (RL), and transformer-based models have shown their potential on representative RL benchmarks. In this paper, we collect and dissect recent advances on transforming RL by transformer (transformer-based RL or TRL) in order to explore its development trajectory and future trend. We group existing developments into two categories, architecture enhancement and trajectory optimization, and examine the main applications of TRL in robotic manipulation, text-based games, navigation and autonomous driving. For architecture enhancement, these methods consider how to apply the powerful transformer structure to RL problems under the traditional RL framework; they model agents and environments much more precisely than deep RL methods, but are still limited by the inherent defects of traditional RL algorithms, such as bootstrapping and the "deadly triad". For trajectory optimization, these methods treat RL problems as sequence modeling and train a joint state-action model over entire trajectories under the behavior cloning framework, which allows them to extract policies from static datasets and fully exploit the long-sequence modeling capability of the transformer. Given these advancements, extensions and challenges in TRL are reviewed and proposals about future directions are discussed. We hope that this survey can provide a detailed introduction to TRL and motivate future research in this rapidly developing field.
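The "trajectory optimization" line of work treats a trajectory as a token sequence, most prominently in Decision-Transformer-style models that interleave return-to-go, state, and action tokens and are trained with a behavior-cloning loss on offline data. A minimal sketch of that sequence-modeling view follows; the sizes, embeddings, and pooling choices are illustrative, not any specific paper's configuration.

```python
import torch
import torch.nn as nn

class TrajectorySequenceModel(nn.Module):
    """Decision-Transformer-style model: embeds (return-to-go, state, action) triples,
    runs a causal Transformer, and predicts the action from each state token."""
    def __init__(self, state_dim: int, act_dim: int, d_model: int = 128,
                 nhead: int = 4, num_layers: int = 3, max_len: int = 1024):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)],
            dim=2,
        ).reshape(B, 3 * T, -1)                       # ..., R_t, s_t, a_t, R_{t+1}, ...
        tokens = tokens + self.pos(torch.arange(3 * T, device=tokens.device))
        causal = torch.triu(torch.full((3 * T, 3 * T), float("-inf"),
                                       device=tokens.device), diagonal=1)
        h = self.transformer(tokens, mask=causal)     # causal attention over the trajectory
        state_tokens = h[:, 1::3]                     # hidden states at the s_t positions
        return self.predict_action(state_tokens)      # predicted a_t, trained with a BC loss

model = TrajectorySequenceModel(state_dim=17, act_dim=6)
pred = model(torch.randn(2, 10, 1), torch.randn(2, 10, 17), torch.randn(2, 10, 6))
print(pred.shape)  # torch.Size([2, 10, 6])
```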